54 research outputs found

    Targets for a Comparative Neurobiology of Language

    One longstanding impediment to progress in understanding the neural basis of language is the lack of model systems that retain language-relevant cognitive behaviors yet permit invasive cellular neuroscience methods. Recent experiments in songbirds suggest that this group may be developed into a powerful animal model, particularly for components of grammatical processing. It remains unknown, however, what a neuroscience of language perception may look like when instantiated at the cellular or network level. Here we deconstruct language perception into a minimal set of cognitive processes necessary to support grammatical processing. We then review the current state of our understanding of the neural mechanisms underlying these requisite cognitive processes in songbirds. We note where current knowledge is lacking, and suggest how these mechanisms may ultimately combine to support an emergent mechanism capable of processing grammatical structures of differing complexity.

    Parametric UMAP embeddings for representation and semi-supervised learning

    UMAP is a non-parametric graph-based dimensionality reduction algorithm using applied Riemannian geometry and algebraic topology to find low-dimensional embeddings of structured data. The UMAP algorithm consists of two steps: (1) Compute a graphical representation of a dataset (fuzzy simplicial complex), and (2) Through stochastic gradient descent, optimize a low-dimensional embedding of the graph. Here, we extend the second step of UMAP to a parametric optimization over neural network weights, learning a parametric relationship between data and embedding. We first demonstrate that Parametric UMAP performs comparably to its non-parametric counterpart while conferring the benefit of a learned parametric mapping (e.g. fast online embeddings for new data). We then explore UMAP as a regularization, constraining the latent distribution of autoencoders, parametrically varying global structure preservation, and improving classifier accuracy for semi-supervised learning by capturing structure in unlabeled data. Google Colab walkthrough: https://colab.research.google.com/drive/1WkXVZ5pnMrm17m0YgmtoNjM_XHdnE5Vp?usp=sharin
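
    The snippet below is a minimal usage sketch of the idea described in this abstract, assuming the ParametricUMAP class distributed with the umap-learn package (with a TensorFlow backend); the data shapes and parameter choices are placeholders, not values from the paper.

```python
# Minimal sketch: fit a parametric UMAP embedding and reuse the learned
# encoder for fast online embedding of new data. Assumes the ParametricUMAP
# class shipped with the umap-learn package (TensorFlow backend).
import numpy as np
from umap.parametric_umap import ParametricUMAP

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 64))   # placeholder for real structured data
X_new = rng.normal(size=(100, 64))      # data arriving after training

embedder = ParametricUMAP(n_components=2)   # learns a neural-network mapping
Z_train = embedder.fit_transform(X_train)   # graph construction + parametric optimization

# Because the mapping is parametric, new points are embedded by a single
# forward pass through the learned encoder rather than by re-running
# the stochastic optimization.
Z_new = embedder.transform(X_new)
print(Z_train.shape, Z_new.shape)  # (1000, 2) (100, 2)
```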

    Neurally driven synthesis of learned, complex vocalizations

    Brain machine interfaces (BMIs) hold promise to restore impaired motor function and serve as powerful tools to study learned motor skill. While limb-based motor prosthetic systems have leveraged nonhuman primates as an important animal model,1–4 speech prostheses lack a similar animal model and are more limited in terms of neural interface technology, brain coverage, and behavioral study design.5–7 Songbirds are an attractive model for learned complex vocal behavior. Birdsong shares a number of unique similarities with human speech,8–10 and its study has yielded general insight into multiple mechanisms and circuits behind learning, execution, and maintenance of vocal motor skill.11–18 In addition, the biomechanics of song production bear similarity to those of humans and some nonhuman primates.19–23 Here, we demonstrate a vocal synthesizer for birdsong, realized by mapping neural population activity recorded from electrode arrays implanted in the premotor nucleus HVC onto low-dimensional compressed representations of song, using simple computational methods that are implementable in real time. Using a generative biomechanical model of the vocal organ (syrinx) as the low-dimensional target for these mappings allows for the synthesis of vocalizations that match the bird's own song. These results provide proof of concept that high-dimensional, complex natural behaviors can be directly synthesized from ongoing neural activity. This may inspire similar approaches to prosthetics in other species by exploiting knowledge of the peripheral systems and the temporal structure of their output.
    Authors: Ezequiel Matías Arneodo (University of California, United States; Instituto de Física La Plata, CONICET / Universidad Nacional de La Plata, Facultad de Ciencias Exactas, Argentina); Shukai Chen (University of California, United States); Daril E. Brown (University of California, United States); Vikash Gilja (University of California, United States); Timothy Q. Gentner (The Kavli Institute for Brain and Mind; University of California, United States)
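
    As a rough illustration of the kind of mapping described above (neural population activity to a low-dimensional representation of song), the sketch below uses ridge regression on a short history of binned spike counts. The binning, the regression model, the array sizes, and the parameter names are illustrative assumptions, not the paper's actual method.

```python
# Illustrative sketch only: a simple real-time-capable decoder mapping binned
# HVC population activity to a low-dimensional representation of song
# (e.g., parameters of a biomechanical syrinx model). All sizes and the
# ridge-regression choice are assumptions for illustration.
import numpy as np
from sklearn.linear_model import Ridge

n_bins, n_channels, n_params = 5000, 64, 3   # hypothetical recording sizes
rng = np.random.default_rng(1)

neural = rng.poisson(2.0, size=(n_bins, n_channels)).astype(float)  # binned spike counts
syrinx_params = rng.normal(size=(n_bins, n_params))                 # e.g., air-sac pressure, labial tension

# Stack a short history of neural bins so the mapping can use temporal context.
history = 5
X = np.hstack([np.roll(neural, k, axis=0) for k in range(history)])[history:]
y = syrinx_params[history:]

decoder = Ridge(alpha=1.0).fit(X, y)
predicted_params = decoder.predict(X)  # these would drive the syrinx model to synthesize audio
print(predicted_params.shape)
```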

    Local field potentials in a pre-motor region predict learned vocal sequences

    Neuronal activity within the premotor region HVC is tightly synchronized to, and crucial for, the articulate production of learned song in birds. Characterizations of this neural activity detail patterns of sequential bursting in small, carefully identified subsets of neurons in the HVC population. The dynamics of HVC are well described by these characterizations, but have not been verified beyond this scale of measurement. There is a rich history of using local field potentials (LFP) to extract information about behavior that extends beyond the contribution of individual cells. These signals have the advantage of being stable over longer periods of time, and they have been used to study and decode human speech and other complex motor behaviors. Here we characterize LFP signals presumptively from the HVC of freely behaving male zebra finches during song production to determine if population activity may yield similar insights into the mechanisms underlying complex motor-vocal behavior. Following an initial observation that structured changes in the LFP were distinct to all vocalizations during song, we show that it is possible to extract time-varying features from multiple frequency bands to decode the identity of specific vocalization elements (syllables) and to predict their temporal onsets within the motif. This demonstrates the utility of LFP for studying vocal behavior in songbirds. Surprisingly, the time-frequency structure of HVC LFP is qualitatively similar to well-established oscillations found in both human and non-human mammalian motor areas. This physiological similarity, despite distinct anatomical structures, may give insight into common computational principles for learning and/or generating complex motor-vocal behaviors.
    Authors: Daril E. Brown (University of California at San Diego, United States); Jairo I. Chavez (University of California at San Diego, United States); Derek H. Nguyen (University of California at San Diego, United States); Adam Kadwory (University of California at San Diego, United States); Bradley Voytek (University of California at San Diego, United States); Ezequiel Matías Arneodo (University of California at San Diego, United States; Instituto de Física La Plata, CONICET / Universidad Nacional de La Plata, Facultad de Ciencias Exactas, Argentina); Timothy Q. Gentner (University of California at San Diego, United States); Vikash Gilja (University of California at San Diego, United States)
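
    The sketch below shows, under stated assumptions, the general style of analysis described above: band-limited LFP envelope features feeding a simple decoder of syllable identity. The band edges, sampling rate, feature summary, and classifier are assumptions for illustration, not the paper's exact pipeline.

```python
# Conceptual sketch: extract per-band amplitude-envelope features from LFP
# snippets and decode syllable identity with a linear classifier.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from sklearn.linear_model import LogisticRegression

fs = 1000.0                                    # sampling rate (Hz), assumed
bands = [(4, 8), (8, 12), (12, 30), (30, 80)]  # example frequency bands (Hz)

def band_envelopes(lfp, fs, bands):
    """Return per-band amplitude envelopes for a 1-D LFP trace."""
    feats = []
    for lo, hi in bands:
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, lfp)
        feats.append(np.abs(hilbert(filtered)))
    return np.stack(feats, axis=1)             # shape: (n_samples, n_bands)

# Toy data: one LFP snippet per syllable rendition, with integer syllable labels.
rng = np.random.default_rng(2)
snippets = rng.normal(size=(200, 500))         # 200 renditions x 500 samples
labels = rng.integers(0, 4, size=200)          # 4 syllable identities

X = np.array([band_envelopes(s, fs, bands).mean(axis=0) for s in snippets])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```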

    Surprising Twist on Auditory Representation


    Pattern-Induced Covert Category Learning in Songbirds

    Language is uniquely human, but its acquisition may involve cognitive capacities shared with other species. During development, language experience alters speech sound (phoneme) categorization. Newborn infants distinguish the phonemes in all languages but by 10 months show adult-like greater sensitivity to native-language phonemic contrasts than to non-native contrasts. Distributional theories account for phonetic learning by positing that infants infer category boundaries from modal distributions of speech sounds along acoustic continua. For example, tokens of the sounds /b/ and /p/ cluster around different mean voice onset times. To disambiguate overlapping distributions, contextual theories propose that phonetic category learning is informed by higher-level patterns (e.g., words) in which phonemes normally occur. For example, the vowel sounds /ɪ/ and /ɛ/ can occupy similar perceptual spaces but can be distinguished in the context of "with" and "well." Both distributional and contextual cues appear to function in speech acquisition. Non-human species also benefit from distributional cues for category learning, but whether category learning benefits from contextual information in non-human animals is unknown. The use of higher-level patterns to guide lower-level category learning may reflect uniquely human capacities tied to language acquisition or more general learning abilities reflecting shared neurobiological mechanisms. Using songbirds (European starlings), we show that higher-level pattern learning covertly enhances categorization of natural communication sounds. This observation mirrors the support for contextual theories of phonemic category learning in humans and demonstrates a general form of learning not unique to humans or language.
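
    As a toy illustration of the distributional account sketched above, the code below fits a two-component Gaussian mixture to a simulated bimodal distribution of voice onset times and reads off a /b/-/p/ boundary; the means, spreads, and sample sizes are invented for illustration.

```python
# Toy illustration of distributional category learning: infer a /b/-/p/
# boundary from a bimodal distribution of voice onset times (VOT), with no
# category labels provided. All numbers are made up for illustration.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
vot_b = rng.normal(loc=5, scale=8, size=500)    # short-lag tokens, roughly /b/-like
vot_p = rng.normal(loc=60, scale=15, size=500)  # long-lag tokens, roughly /p/-like
vot = np.concatenate([vot_b, vot_p]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(vot)

# The crossover point where the two components are equally likely acts as the
# learned category boundary along the acoustic continuum.
grid = np.linspace(vot.min(), vot.max(), 2000).reshape(-1, 1)
resp = gmm.predict_proba(grid)
boundary = grid[np.argmin(np.abs(resp[:, 0] - resp[:, 1]))][0]
print(f"inferred VOT boundary: {boundary:.1f} ms")
```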